Results 1 - 7 of 7
1.
BMJ Glob Health; 8(5), 2023 May.
Article in English | MEDLINE | ID: covidwho-20244705

ABSTRACT

BACKGROUND: The COVID-19 pandemic required science to provide answers rapidly to combat the outbreak. Hence, the reproducibility and quality of research conduct may have been threatened, particularly regarding privacy and data protection, in varying ways around the globe. The objective was to investigate aspects of reporting informed consent and data handling as proxies for the quality of study conduct. METHODS: A systematic scoping review was performed by searching PubMed and Embase. The search was performed on November 8th, 2020. Studies of hospitalised patients aged over 18 years diagnosed with COVID-19 were eligible for inclusion. With a focus on informed consent, data were extracted on study design, prestudy protocol registration, ethical approval, data anonymisation, data sharing and data transfer as proxies for study quality. For comparison, data on country income level, study location and journal impact factor were also collected. RESULTS: 972 studies were included. 21.3% of studies reported informed consent, 42.6% reported waivers of consent, 31.4% did not report consent information and 4.7% mentioned other types of consent. Informed consent reporting was highest in clinical trials (94.6%) and lowest in retrospective cohort studies (15.0%). The reporting of consent versus no consent did not differ significantly by journal impact factor (p=0.159). 16.8% of studies reported a prestudy protocol registration or design. Ethical approval was described in 90.9% of studies. Information on anonymisation was provided in 17.0% of studies. Of 257 multicentre studies, 1.2% reported on data sharing agreements, and none reported on the Findable, Accessible, Interoperable and Reusable (FAIR) data principles. 1.2% reported on open data. Consent was most often reported in the Middle East (42.4%) and least often in North America (4.7%). Only one report originated from a low-income country.
DISCUSSION: Informed consent and aspects of data handling and sharing were under-reported in publications concerning COVID-19 and differed between countries, which strains the quality of study conduct at a time when answers are in dire need.


Subject(s)
COVID-19, Pandemics, Humans, Adolescent, Retrospective Studies, Reproducibility of Results, Informed Consent
2.
J Clin Epidemiol; 154: 75-84, 2023 Feb.
Article in English | MEDLINE | ID: covidwho-2241601

ABSTRACT

OBJECTIVES: To assess improvement in the completeness of reporting coronavirus (COVID-19) prediction models after the peer review process. STUDY DESIGN AND SETTING: Studies included in a living systematic review of COVID-19 prediction models, with both preprint and peer-reviewed published versions available, were assessed. The primary outcome was the change in percentage adherence to the transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD) reporting guidelines between pre-print and published manuscripts. RESULTS: Nineteen studies were identified including seven (37%) model development studies, two external validations of existing models (11%), and 10 (53%) papers reporting on both development and external validation of the same model. Median percentage adherence among preprint versions was 33% (min-max: 10 to 68%). The percentage adherence of TRIPOD components increased from preprint to publication in 11/19 studies (58%), with adherence unchanged in the remaining eight studies. The median change in adherence was just 3 percentage points (pp, min-max: 0-14 pp) across all studies. No association was observed between the change in percentage adherence and preprint score, journal impact factor, or time between journal submission and acceptance. CONCLUSIONS: The preprint reporting quality of COVID-19 prediction modeling studies is poor and did not improve much after peer review, suggesting peer review had a trivial effect on the completeness of reporting during the pandemic.
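The adherence metrics in this abstract are simple proportions, and the reported "change" is a difference in percentage points rather than a relative change. As a hypothetical illustration (the item counts below are invented; the number of applicable TRIPOD items varies by study type), the calculation can be sketched as:

```python
# Hypothetical TRIPOD tallies for one study: items reported in the
# preprint vs. the peer-reviewed version, out of the same number of
# applicable checklist items (37 is an assumed count for illustration).
TOTAL_ITEMS = 37
preprint_reported = 12
published_reported = 13

adherence_pre = 100 * preprint_reported / TOTAL_ITEMS    # percentage adherence, preprint
adherence_pub = 100 * published_reported / TOTAL_ITEMS   # percentage adherence, published
change_pp = adherence_pub - adherence_pre                # change in percentage points

print(f"{adherence_pre:.0f}% -> {adherence_pub:.0f}% (+{change_pp:.1f} pp)")
```

Reporting the change in percentage points (as the abstract does) avoids the ambiguity of "adherence increased by 8%", which could mean either an absolute or a relative change.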


Subject(s)
COVID-19, Humans, COVID-19/epidemiology, Prognosis, Pandemics
3.
J Clin Epidemiol; 152: 257-268, 2022 Oct 27.
Article in English | MEDLINE | ID: covidwho-2086388

ABSTRACT

OBJECTIVES: Many prediction models for coronavirus disease 2019 (COVID-19) have been developed. External validation is mandatory before implementation in the intensive care unit (ICU). We selected and validated prognostic models in the Euregio Intensive Care COVID (EICC) cohort. STUDY DESIGN AND SETTING: In this multinational cohort study, routine data from COVID-19 patients admitted to ICUs within the Euregio Meuse-Rhine were collected from March to August 2020. COVID-19 models were selected based on model type, predictors, outcomes, and reporting. Furthermore, general ICU scores were assessed. Discrimination was assessed by the area under the receiver operating characteristic curve (AUC) and calibration by calibration-in-the-large and calibration plots. A random-effects meta-analysis was used to pool results. RESULTS: 551 patients were admitted. Mean age was 65.4 ± 11.2 years, 29% were female, and ICU mortality was 36%. Nine of 238 published models were externally validated. Pooled AUCs ranged from 0.53 to 0.70 and calibration-in-the-large from -9% to 6%. Calibration plots showed generally poor calibration, although the 4C Mortality Score and the Spanish Society of Infectious Diseases and Clinical Microbiology (SEIMC) score were moderately calibrated. CONCLUSION: Of the nine prognostic models externally validated in the EICC cohort, only two showed reasonable discrimination and moderate calibration. For future pandemics, better models based on routine data are needed to support admission decision-making.
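The two headline validation metrics in this abstract, discrimination (AUC) and calibration-in-the-large, can be sketched in plain Python. This is a minimal illustration on invented patient data, not the study's actual analysis (which additionally pooled per-centre results in a random-effects meta-analysis):

```python
def auc(y_true, y_prob):
    # Mann-Whitney formulation of the AUC: the probability that a
    # randomly chosen event gets a higher predicted risk than a
    # randomly chosen non-event (ties count half).
    pos = [p for p, y in zip(y_prob, y_true) if y == 1]
    neg = [p for p, y in zip(y_prob, y_true) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def calibration_in_the_large(y_true, y_prob):
    # Observed event rate minus mean predicted risk; 0 means no
    # overall over- or under-prediction ("calibration in the large").
    return sum(y_true) / len(y_true) - sum(y_prob) / len(y_prob)

# Invented data: ICU outcomes (1 = died) and predicted mortality risks.
outcomes = [0, 0, 1, 0, 1, 1]
risks = [0.10, 0.20, 0.80, 0.30, 0.60, 0.90]
print(auc(outcomes, risks))                        # prints 1.0 (perfect ranking)
print(calibration_in_the_large(outcomes, risks))   # near 0: well calibrated overall
```

A calibration-in-the-large of -9%, as at the low end of the abstract's range, would mean the model predicts on average 9 percentage points more deaths than are observed.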

4.
BMJ; 378: e069881, 2022 Jul 12.
Article in English | MEDLINE | ID: covidwho-1932661

ABSTRACT

OBJECTIVE: To externally validate various prognostic models and scoring rules for predicting short term mortality in patients admitted to hospital for covid-19. DESIGN: Two stage individual participant data meta-analysis. SETTING: Secondary and tertiary care. PARTICIPANTS: 46 914 patients across 18 countries, admitted to hospital with polymerase chain reaction confirmed covid-19 from November 2019 to April 2021. DATA SOURCES: Multiple (clustered) cohorts in Brazil, Belgium, China, Czech Republic, Egypt, France, Iran, Israel, Italy, Mexico, Netherlands, Portugal, Russia, Saudi Arabia, Spain, Sweden, United Kingdom, and United States previously identified by a living systematic review of covid-19 prediction models published in The BMJ, and through PROSPERO, reference checking, and expert knowledge. MODEL SELECTION AND ELIGIBILITY CRITERIA: Prognostic models identified by the living systematic review and through contacting experts. Models were excluded a priori if they had a high risk of bias in the participant domain of PROBAST (prediction model study risk of bias assessment tool) or if their applicability was deemed poor. METHODS: Eight prognostic models with diverse predictors were identified and validated. A two stage individual participant data meta-analysis was performed of the estimated model concordance (C) statistic, calibration slope, calibration-in-the-large, and observed to expected ratio (O:E) across the included clusters. MAIN OUTCOME MEASURES: 30 day mortality or in-hospital mortality. RESULTS: Datasets included 27 clusters from 18 different countries and contained data on 46 914 patients. The pooled estimates ranged from 0.67 to 0.80 (C statistic), 0.22 to 1.22 (calibration slope), and 0.18 to 2.59 (O:E ratio) and were prone to substantial between study heterogeneity.
The 4C Mortality Score by Knight et al (pooled C statistic 0.80, 95% confidence interval 0.75 to 0.84, 95% prediction interval 0.72 to 0.86) and clinical model by Wang et al (0.77, 0.73 to 0.80, 0.63 to 0.87) had the highest discriminative ability. On average, 29% fewer deaths were observed than predicted by the 4C Mortality Score (pooled O:E 0.71, 95% confidence interval 0.45 to 1.11, 95% prediction interval 0.21 to 2.39), 35% fewer than predicted by the Wang clinical model (0.65, 0.52 to 0.82, 0.23 to 1.89), and 4% fewer than predicted by Xie et al's model (0.96, 0.59 to 1.55, 0.21 to 4.28). CONCLUSION: The prognostic value of the included models varied greatly between the data sources. Although the Knight 4C Mortality Score and Wang clinical model appeared most promising, recalibration (intercept and slope updates) is needed before implementation in routine care.
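The O:E ratio reported above, and the "intercept and slope updates" the conclusion recommends (i.e. logistic recalibration), can be sketched as follows. This is a minimal gradient-descent sketch on invented validation data, not the authors' analysis code:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def oe_ratio(y_true, y_prob):
    # Observed deaths divided by the number the model expects (the sum
    # of predicted risks); O:E < 1 means the model over-predicts deaths.
    return sum(y_true) / sum(y_prob)

def recalibrate(y_true, y_prob, lr=0.5, iters=20000):
    # Logistic recalibration ("intercept and slope update"): refit a, b
    # in P(y=1) = sigmoid(a + b * logit(p_original)) by gradient descent
    # on the log loss. The original model's risk ordering is preserved;
    # only its calibration changes.
    a, b = 0.0, 1.0
    x = [logit(p) for p in y_prob]
    n = len(y_true)
    for _ in range(iters):
        ga = gb = 0.0
        for xi, yi in zip(x, y_true):
            pred = 1 / (1 + math.exp(-(a + b * xi)))
            ga += (pred - yi) / n
            gb += (pred - yi) * xi / n
        a -= lr * ga
        b -= lr * gb
    return a, b

# Invented data with systematic over-prediction: observed deaths and
# the original model's predicted risks.
deaths = [0, 1, 0, 1, 0, 0, 1, 0]
risks = [0.4, 0.5, 0.7, 0.9, 0.6, 0.3, 0.5, 0.8]
print(oe_ratio(deaths, risks))        # < 1: fewer deaths observed than predicted
a, b = recalibrate(deaths, risks)
print(a, b)                           # negative intercept shifts all risks down
```

After this update, the recalibrated mean prediction matches the observed death rate, which is exactly the kind of adjustment an O:E of 0.71 for the 4C Mortality Score would call for before routine use.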


Subject(s)
COVID-19, Models, Statistical, Data Analysis, Hospital Mortality, Humans, Prognosis
5.
J Clin Epidemiol; 138: 219-226, 2021 Oct.
Article in English | MEDLINE | ID: covidwho-1253151

ABSTRACT

Covid-19 research made it painfully clear that the scandal of poor medical research, as denounced by Altman in 1994, persists today. The overall quality of medical research remains poor, despite longstanding criticisms. The problems are well known, but the research community fails to properly address them. We suggest that most problems stem from an underlying paradox: although methodology is undeniably the backbone of high-quality and responsible research, science consistently undervalues methodology. The focus remains more on the destination (research claims and metrics) than on the journey. Nevertheless, research should serve society more than the reputations of those involved. While we notice that many initiatives are being established to improve components of the research cycle, these initiatives are too disjointed. The overall system is monolithic and slow to adapt. We assert that top-down action is needed from journals, universities, funders and governments to break the cycle and put methodology first. These actions should involve the widespread adoption of registered reports, balanced research funding between innovative, incremental and methodological research projects, full recognition and demystification of peer review, improved methodological review of reports, adherence to reporting guidelines, and investment in methodological education and research. Currently, the scientific enterprise is doing a major disservice to patients and society.


Subject(s)
Biomedical Research/methods, Biomedical Research/standards, Research Design/standards, COVID-19/epidemiology, COVID-19/prevention & control, Humans
6.
Lancet Respir Med; 9(4): 320-321, 2021 Apr.
Article in English | MEDLINE | ID: covidwho-1180131

Subject(s)
COVID-19, Humans, Prognosis, SARS-CoV-2
7.
BMJ; 369: m1328, 2020 Apr 7.
Article in English | MEDLINE | ID: covidwho-648504

ABSTRACT

OBJECTIVE: To review and appraise the validity and usefulness of published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of covid-19 infection or being admitted to hospital with the disease. DESIGN: Living systematic review and critical appraisal by the COVID-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group. DATA SOURCES: PubMed and Embase through Ovid, up to 1 July 2020, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020. STUDY SELECTION: Studies that developed or validated a multivariable covid-19 related prediction model. DATA EXTRACTION: At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool). RESULTS: 37 421 titles were screened, and 169 studies describing 232 prediction models were included. The review identified seven models for identifying people at risk in the general population; 118 diagnostic models for detecting covid-19 (75 were based on medical imaging, 10 to diagnose disease severity); and 107 prognostic models for predicting mortality risk, progression to severe disease, intensive care unit admission, ventilation, intubation, or length of hospital stay. The most frequent types of predictors included in the covid-19 prediction models are vital signs, age, comorbidities, and image features. Flu-like symptoms are frequently predictive in diagnostic models, while sex, C reactive protein, and lymphocyte counts are frequent prognostic factors. 
Reported C index estimates from the strongest form of validation available per model ranged from 0.71 to 0.99 in prediction models for the general population, from 0.65 to more than 0.99 in diagnostic models, and from 0.54 to 0.99 in prognostic models. All models were rated at high or unclear risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, high risk of model overfitting, and unclear reporting. Many models did not include a description of the target population (n=27, 12%) or care setting (n=75, 32%), and only 11 (5%) were externally validated by a calibration plot. The Jehi diagnostic model and the 4C mortality score were identified as promising models. CONCLUSION: Prediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that almost all published prediction models are poorly reported and at high risk of bias, such that their reported predictive performance is probably optimistic. However, we have identified two (one diagnostic and one prognostic) promising models that should soon be validated in multiple cohorts, preferably through collaborative efforts and data sharing, to also allow an investigation of the stability and heterogeneity in their performance across populations and settings. Details on all reviewed models are publicly available at https://www.covprecise.org/. Methodological guidance as provided in this paper should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction model authors should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline. SYSTEMATIC REVIEW REGISTRATION: Protocol https://osf.io/ehc47/, registration https://osf.io/wy245.
READERS' NOTE: This article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. This version is update 3 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper please consider adding the update number and date of access for clarity.


Subject(s)
Coronavirus Infections/diagnosis, Models, Theoretical, Pneumonia, Viral/diagnosis, COVID-19, Coronavirus, Disease Progression, Hospitalization/statistics & numerical data, Humans, Multivariate Analysis, Pandemics, Prognosis